Learning Adaptive Random Features

Authors

Abstract


Similar Articles

Learning Kernels with Random Features

Randomized features provide a computationally efficient way to approximate kernel machines in machine learning tasks. However, such methods require a user-defined kernel as input. We extend the randomized-feature approach to the task of learning a kernel (via its associated random features). Specifically, we present an efficient optimization problem that learns a kernel in a supervised manner. ...
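The randomized-feature approach this abstract builds on is the standard random Fourier features construction of Rahimi and Recht, in which an explicit low-dimensional feature map approximates a user-defined shift-invariant kernel. A minimal sketch for the RBF kernel (the bandwidth `gamma` and feature count `D` below are illustrative choices, not values from the paper):

```python
import numpy as np

def random_fourier_features(X, D=2000, gamma=0.5, seed=0):
    """Map X of shape (n, d) to D random Fourier features whose inner
    products approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's Fourier transform: w ~ N(0, 2*gamma*I)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Compare the approximate kernel Z Z^T against the exact RBF Gram matrix
X = np.random.default_rng(42).normal(size=(5, 3))
Z = random_fourier_features(X)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(axis=-1))
```

Learning a kernel, as the abstract proposes, then amounts to optimizing over the distribution these frequencies `W` are drawn from rather than fixing it in advance.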


Learning with data adaptive features

Cost-complexity pruning generates nested subtrees and selects the best one. However, its computational cost is large, since it relies on a hold-out sample or cross-validation. On the other hand, pruning algorithms based on posterior calculations, such as BIC (MDL) and MEP, are faster, but they sometimes produce trees that are too large or too small, yielding poor generalization error. In this paper, we propose a...


Learning Relational Features with Backward Random Walks

The path ranking algorithm (PRA) has been recently proposed to address relational classification and retrieval tasks at large scale. We describe Cor-PRA, an enhanced system that can model a larger space of relational rules, including longer relational rules and a class of first order rules with constants, while maintaining scalability. We describe and test faster algorithms for searching for th...


Learning Relational Features with Backward Random Walks

A path learning algorithm (PRA) has recently been proposed that addresses link prediction tasks on heterogeneous graphs using learned combinations of labeled paths. Unlike most statistical relational learning methods, this approach scales to large data sets. In this paper, we extend PRA in terms of expressive power, while maintaining its high scalability. Mainly, we propose to compute backward r...


Generalization Properties of Learning with Random Features

We study the generalization properties of regularized learning with random features in the statistical learning theory framework. We show that optimal learning errors can be achieved with a number of features smaller than the number of examples. As a byproduct, we also show that learning with random features can be seen as a form of regularization, rather than only a way to speed up computations.
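The claim that far fewer features than examples can suffice is easy to see empirically with ridge regression on random Fourier features. A minimal sketch on a synthetic 1-D problem (the feature count `D = 50`, bandwidth `gamma`, and regularizer `lam` are illustrative assumptions, not settings from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression problem: n examples, noisy sin target
n, D = 500, 50                       # D features << n examples
X = rng.uniform(-3.0, 3.0, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# Random Fourier features for an RBF kernel with assumed bandwidth gamma
gamma = 0.5
W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(1, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Regularized least squares in the D-dimensional random-feature space;
# the ridge term lam plays the regularization role the abstract highlights
lam = 1e-3
theta = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)

# Fit quality stays good even though D = 50 is a tenth of n = 500
mse = np.mean((Z @ theta - y) ** 2)
```

Here both the number of features and the ridge penalty act as regularizers, which is the "random features as regularization" view the abstract describes.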



Journal

Journal title: Proceedings of the AAAI Conference on Artificial Intelligence

Year: 2019

ISSN: 2374-3468,2159-5399

DOI: 10.1609/aaai.v33i01.33014229